81 research outputs found

    Fast human activity recognition in lifelogging

    This paper addresses the problem of fast Human Activity Recognition (HAR) in visual lifelogging. We identify the visual features most relevant to HAR and specifically evaluate the HAR discrimination potential of Colour Histograms and Histograms of Oriented Gradients. Our evaluation shows that colour can be a low-cost and effective means of HAR when performing single-user classification. We also note that, while much more efficient, global image descriptors perform as well as or better than local descriptors in our HAR experiments. We believe both findings stem from the fact that a user's lifelog is rich in recurring scenes and environments.
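    As a rough illustration of the two global descriptors compared above, the sketch below computes a colour histogram and a HOG descriptor per image and trains a linear classifier. The bin counts, image size and choice of SVM are assumptions for demonstration, not the configuration used in the paper.

```python
# Illustrative sketch: global colour histogram vs. HOG descriptors for
# frame-level activity classification. Bin sizes, resize resolution and the
# linear SVM are assumptions, not the paper's settings.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def colour_histogram(image_bgr, bins=8):
    """Global colour histogram over the three BGR channels, L1-normalised."""
    hist = cv2.calcHist([image_bgr], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256] * 3)
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-8)

def hog_descriptor(image_bgr):
    """Global HOG descriptor computed on the resized greyscale image."""
    grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    grey = cv2.resize(grey, (128, 128))
    return hog(grey, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

def train_activity_classifier(train_images, train_labels, feature_fn):
    """Single-user setting: fit one linear SVM on the chosen feature type.
    `train_images` and `train_labels` are hypothetical placeholders."""
    X = np.array([feature_fn(img) for img in train_images])
    clf = LinearSVC()
    clf.fit(X, train_labels)
    return clf
```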

    Semantic Image Segmentation Using Visible and Near-Infrared Channels

    Recent progress in computational photography has shown that we can acquire physical information beyond visible (RGB) image representations. In particular, near-infrared (NIR) cues can be acquired with only a slight modification to any standard digital camera. In this paper, we study whether this extra channel can improve semantic image segmentation. Based on a state-of-the-art segmentation framework and a novel manually segmented image database containing 4-channel (RGB+NIR) images, we study how best to incorporate the specific characteristics of the NIR response. We show that it leads to improved performance for 7 of the 10 classes in the proposed dataset and discuss the results with respect to the physical properties of the NIR response.
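    The sketch below illustrates, under assumptions about file layout and channel registration, how an NIR channel might be stacked onto an RGB image to form the 4-channel input the abstract describes. The NDVI-like cue is a generic example of exploiting the NIR response, not the paper's method.

```python
# Illustrative sketch: building a 4-channel (RGB+NIR) input from two
# registered images. File layout and the NDVI-like cue are assumptions,
# not the paper's pipeline.
import numpy as np
from PIL import Image

def load_rgbn(rgb_path, nir_path):
    """Return an H x W x 4 array with channels R, G, B, NIR in [0, 1]."""
    rgb = np.asarray(Image.open(rgb_path).convert("RGB"), dtype=np.float32) / 255.0
    nir = np.asarray(Image.open(nir_path).convert("L"), dtype=np.float32) / 255.0
    if nir.shape != rgb.shape[:2]:
        raise ValueError("RGB and NIR images must be spatially registered")
    return np.dstack([rgb, nir])

def ndvi_like(rgbn, eps=1e-6):
    """Per-pixel cue exploiting the NIR response: vegetation reflects
    strongly in NIR, so a normalised NIR/red difference can help classes
    such as grass or trees (a generic assumption, not the paper's feature)."""
    red, nir = rgbn[..., 0], rgbn[..., 3]
    return (nir - red) / (nir + red + eps)
```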

    Bag-of-Colors for Biomedical Document Image Classification

    The number of biomedical publications has increased noticeably in the last 30 years. Clinicians and medical researchers regularly have unmet information needs, but finding publications relevant to a clinical situation requires more search time than is usually available. The techniques described in this article are used to classify images from the biomedical open access literature into categories, which can potentially reduce the search time. Only the visual information of the images is used, based on a benchmark database created for the ImageCLEF 2011 image classification and image retrieval tasks. We particularly evaluate the importance of color in addition to the frequently used texture and grey-level features. Results show that bags-of-colors in combination with the Scale Invariant Feature Transform (SIFT) provide an image representation that improves classification quality. Accuracy improved from 69.75% for the best system in ImageCLEF 2011 using only visual information to 72.5% for the system described in this paper. The results highlight the importance of color for the classification of biomedical images.
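    A minimal sketch of a bag-of-colors representation combined with a bag of SIFT visual words is given below. The vocabulary construction, vocabulary sizes and fusion by concatenation are illustrative assumptions rather than the settings used in the paper.

```python
# Illustrative sketch: bag-of-colors plus bag of SIFT visual words.
# Both vocabularies are k-means models fitted beforehand on training data
# (colour pixels and SIFT descriptors respectively) -- an assumption for
# demonstration, not the paper's exact procedure.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def fit_vocab(samples, n_words):
    """Fit a k-means vocabulary (colour words or SIFT visual words)."""
    return MiniBatchKMeans(n_clusters=n_words).fit(samples)

def bag_of_colors(image_bgr, color_vocab):
    """Assign each pixel to its nearest colour word; return a normalised histogram."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    words = color_vocab.predict(pixels)
    hist = np.bincount(words, minlength=color_vocab.n_clusters).astype(np.float32)
    return hist / (hist.sum() + 1e-8)

def bag_of_sift(image_bgr, sift_vocab):
    """Quantise SIFT descriptors against a pre-trained visual vocabulary."""
    sift = cv2.SIFT_create()
    grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, descriptors = sift.detectAndCompute(grey, None)
    if descriptors is None:
        return np.zeros(sift_vocab.n_clusters, dtype=np.float32)
    words = sift_vocab.predict(descriptors.astype(np.float32))
    hist = np.bincount(words, minlength=sift_vocab.n_clusters).astype(np.float32)
    return hist / (hist.sum() + 1e-8)

def combined_representation(image_bgr, color_vocab, sift_vocab):
    """Concatenate the two histograms for classification (e.g. with an SVM)."""
    return np.concatenate([bag_of_colors(image_bgr, color_vocab),
                           bag_of_sift(image_bgr, sift_vocab)])
```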

    Predicting Actions from Static Scenes

    Human actions naturally co-occur with scenes. In this work we aim to discover action-scene correlation for a large number of scene categories and to use such correlation for action prediction. Towards this goal, we collect a new SUN Action dataset with manual annotations of typical human actions for 397 scenes. We next discover action-scene associations and demonstrate that scene categories can be well identified from their associated actions. Using the discovered associations, we address a new task of predicting human actions for images of static scenes. We evaluate prediction of 23 and 38 action classes for images of indoor and outdoor scenes, respectively, and show promising results. We also propose a new application of geo-localized action prediction and demonstrate the ability of our method to automatically answer queries such as "Where is a good place for a picnic?" or "Can I cycle along this path?"
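    The sketch below shows one generic way such action-scene associations could be used at prediction time, by combining a scene classifier's posterior with a conditional action-given-scene matrix. Both inputs are hypothetical placeholders, not the models from the paper.

```python
# Illustrative sketch: rank actions for a static image via
#   score(action) = sum_s P(action | scene s) * P(scene s | image).
# The association matrix and the scene posterior are hypothetical inputs.
import numpy as np

def predict_actions(scene_posterior, action_given_scene, action_names, top_k=5):
    """
    scene_posterior:    shape (n_scenes,), P(scene | image), sums to 1.
    action_given_scene: shape (n_actions, n_scenes); column s holds
                        P(action | scene s), e.g. estimated from manual
                        per-scene action annotations (an assumption).
    Returns the top_k (action name, score) pairs.
    """
    action_scores = action_given_scene @ scene_posterior
    ranked = np.argsort(action_scores)[::-1][:top_k]
    return [(action_names[i], float(action_scores[i])) for i in ranked]
```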

    Invariant color descriptors for efficient object recognition

    Koen van de Sande investigated how computer programs can recognise objects in photos and videos quickly and accurately. To determine which objects (such as people, plants and animals) appear in the material, he used the most advanced search techniques capable of handling varying recording conditions (such as side and frontal views). Van de Sande determined the location of an object in the image using very fast algorithms. With the growing volume of video files, there is an increasing need for search engines that make it possible to search for video fragments. According to Van de Sande, the object recognition software makes it easier to automatically search large databases such as YouTube, Flickr and Facebook for, for example, faces, cars and violence.

    Empowering Visual Categorization with the GPU
